What Kind of Writer Is ChatGPT?

The New Yorker

Last spring, a graduate student in social anthropology--let's call him Chris--sat down at his laptop and asked ChatGPT for help with a writing assignment. He pasted a few thousand words, a mix of rough summaries and jotted-down bullet points, into the text box that serves as ChatGPT's interface. "Here's my entire exam," he wrote. "Don't edit it, I will tell you what to do after you've read it." Chris was tackling a difficult paper about perspectivism, which is the anthropological principle that one's perspective inevitably shapes the observations one makes and the knowledge one acquires.


Spurious Features Everywhere -- Large-Scale Detection of Harmful Spurious Features in ImageNet

Neuhaus, Yannic, Augustin, Maximilian, Boreiko, Valentyn, Hein, Matthias

arXiv.org Artificial Intelligence

Benchmark performance of deep learning classifiers alone is not a reliable predictor for the performance of a deployed model. In particular, if the image classifier has picked up spurious features in the training data, its predictions can fail in unexpected ways. In this paper, we develop a framework that allows us to systematically identify spurious features in large datasets like ImageNet. It is based on our neural PCA components and their visualization. Previous work on spurious features often operates in toy settings or requires costly pixel-wise annotations. In contrast, we work with ImageNet and validate our results by showing that the presence of the harmful spurious feature of a class alone is sufficient to trigger the prediction of that class. We introduce the novel dataset "Spurious ImageNet", which allows one to measure the reliance of any ImageNet classifier on harmful spurious features. Moreover, we introduce SpuFix as a simple mitigation method to reduce the dependence of any ImageNet classifier on previously identified harmful spurious features without requiring additional labels or retraining of the model. We provide code and data. [Figure 1, top: examples of spurious features (bird feeder, graffiti, eucalyptus, label) found via the neural PCA components but not in a previous study [61]; images from the web containing only the spurious feature, and no class features, are classified as Hummingbird, Freight Car, Koala, or Hard Disc.]


Enhancing Small Medical Learners with Privacy-preserving Contextual Prompting

Zhang, Xinlu, Li, Shiyang, Yang, Xianjun, Tian, Chenxin, Qin, Yao, Petzold, Linda Ruth

arXiv.org Artificial Intelligence

Large language models (LLMs) demonstrate remarkable medical expertise, but data privacy concerns impede their direct use in healthcare environments. Although offering improved data privacy protection, domain-specific small language models (SLMs) often underperform LLMs, emphasizing the need for methods that reduce this performance gap while alleviating privacy concerns. In this paper, we present a simple yet effective method that harnesses LLMs' medical proficiency to boost SLM performance in medical tasks under privacy-restricted scenarios. Specifically, we mitigate patient privacy issues by extracting keywords from medical data and prompting the LLM to generate a medical knowledge-intensive context by simulating clinicians' thought processes. This context serves as additional input for SLMs, augmenting their decision-making capabilities. Our method significantly enhances performance in both few-shot and full training settings across three medical knowledge-intensive tasks, achieving up to a 22.57% increase in absolute accuracy compared to SLM fine-tuning without context, and sets new state-of-the-art results in two medical tasks within privacy-restricted scenarios. Further out-of-domain testing and experiments in two general domain datasets showcase its generalizability and broad applicability.


R.U.R. (Rossum's Universal Robots): PROPERTY LIST

#artificialintelligence

R.U.R. (Rossum's Universal Robots), by Karel Capek is part of HackerNoon's Book Blog Post series. You can jump to any chapter in this book here. 1 Box candy. 1 Pad and blotter. 1 Letter opener. 1 Cigarette box. 1 Inkwell stand. 1 Practical buzzer (6 buttons). Off L.: 1 Fountain pen (for Busman). 1 Telephone buzzer. 1 Siren whistle. On Table L.C.: 2 Book ends (wooden).


The Verge's favorite guilty pleasures

#artificialintelligence

We all have stuff that we've bought ourselves -- or asked others to buy for us -- that makes us happy, even if we suspect our friends may not understand why it's so great. It could be a $100-plus coffee cup that keeps your liquid at the exact right temperature. Or a video game that you've been playing for years. Or a hair styler that is way expensive but would make you look fabulous. We asked the staff of The Verge what some of their guilty pleasures are, and the braver among us volunteered some answers. I'm hesitant to call it a "guilty" pleasure because I have used this $550 (or more) GE Opal 2.0 ice machine every day for nearly a full year and not once have I felt guilt about spending such an obscene amount of cash on a kitchen gadget that does exactly one thing.


Artificial Intelligence and the Future of Content

#artificialintelligence

The future of content may be artificial intelligence. But before we look ahead to what's coming, it might be helpful to first look back at where we've been. With that in mind, let me give you a short history lesson about how the tools we use to create content have evolved over time. I promise to keep it brief. If we look far back in time, the earliest writing implements emerged around 4000 BC, when people would fashion pieces of bronze or bone into tools they could use to mark soft clay tablets.